cd: create a gha that generate an index for each query in the src/sql folder #5
Conversation
Well done moving forward, I will review and help out tomorrow.
I will spend more time reviewing, but a few comments for now:
In light of the reorganization of the repositories by @jcfr, I think we should revisit the original plan outlined in #2 (comment). In particular, as I understand it, there is no longer a need to attach index files as release artifacts; they can be uploaded directly to PyPI as part of the package. This would also make it unnecessary to create an issue. Maybe we should have one workflow with a manual trigger to generate the results of running the query, as you did it - this will be useful to verify everything works as expected before preparing the package - and then another one that would be triggered on the creation of the release (as done in …).
Suggestions:
This first version should package … After that, we will add the …
Can you clarify what you mean by this? Effectively, detect a new version of what, and in which package?
I see. We considered this approach before, but at this point I do not believe it is a good approach. First, IDC releases do not happen frequently - once every 1-3 months - so there is no significant burden in responding to IDC version updates manually. Second, new IDC releases will have new data or schema updates that may break assumptions, produce an index that is too large, or cause other issues we can't anticipate. Because of this, it is safer to manually test the queries and consider the result of each query before publishing. I would prefer to have: …
@jcfr I was finally able to call the data manager directly from cmake, and cleaned up the data manager and cd heavily. I realized I was very far from your vision about using cmake to handle index generation, but I am also slowly understanding how pyproject.toml, cmake, and … work together. Latest successful run: https://github.com/vkt1414/idc-index-data/actions/runs/8366868268
One more update: After including the GCP authentication step, CI is also working on all combinations of Python 3.8-3.12 on latest macOS, Ubuntu, and Windows. However, pypy-3.10 on latest Ubuntu did not work and the logs did not help to troubleshoot, so I disabled pypy-3.10 for now.
For reference, this PR was needed to fix the CI failure due to the inability to access GHA secrets from a PR submitted from a fork of the repo: #7.
The issue with the current approach is that the wheels are not reproducible: the content of the generated index files depends on the time at which they are generated.
For the sake of "simplicity", you could move forward with this approach, but it may be sensible to:
- document that the generated wheel includes an index file corresponding to the state of the tables at the time the wheel was generated
- not upload any source distribution (.tar.gz) on PyPI; indeed, it would be misleading, as rebuilding from the source distribution would not produce an index file matching the one provided in the wheel
- revisit the versioning from being "number" based to being date based (e.g. YYYY.MM.DD.N)
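For illustration, a date-based version string of that form could be derived at packaging time. This is only a sketch; the build counter and formatting are assumptions, not part of the current setup:

```python
from datetime import datetime, timezone

# Hypothetical counter to distinguish rebuilds on the same day (the "N" in YYYY.MM.DD.N)
build_number = 1

today = datetime.now(timezone.utc)
version = f"{today.year}.{today.month:02d}.{today.day:02d}.{build_number}"
print(version)  # e.g. "2024.03.25.1"
```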
@jcfr this is a very good point. I did not consider that it is important for the wheel packaging process to be reproducible.
The current approach of using the IDC data release version as the major version of the package serves exactly the purpose of enabling reproducibility. I would argue that mapping a date to the IDC release version is not trivial, but if the user knows the IDC data version, that is all that is needed to make queries reproducible. Thinking about this a bit more, we could modify the queries to reference, for example, the explicit versioned dataset (idc_v<release>) instead of idc_current.
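A rough sketch of that idea, rewriting the queries to point at an explicit release dataset instead of idc_current; the version number and folder layout are illustrative:

```python
from pathlib import Path

IDC_VERSION = 18  # illustrative; would be determined from the latest IDC release

for sql_file in Path("src/sql").glob("*.sql"):
    query = sql_file.read_text()
    # Replace the floating "idc_current" alias with the explicit versioned dataset
    versioned = query.replace("idc_current", f"idc_v{IDC_VERSION}")
    sql_file.write_text(versioned)
```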
Indeed, that would be similar to the use of … For convenience, supporting …
Extracting the major version and mapping it to the name of the table is an option. For this to work, we would also need to customize the versioning with the following settings: …
The "challenge" with relying only on the tag is that we would have no way of testing the generation of a wheel associated with a new version of the index by simply creating a pull request, indeed a new tag and release would have to be created first without if it actually work. To address this I would instead suggest to hard-code the major version in the |
If all of the solutions we discussed involve manually updating the version, can't we simply hardcode the IDC version in the SQL query itself?
I have a draft of … Before moving forward, we really need @fedorov to integrate #8, as none of the code in this pull request is being tested.
@jcfr that PR is integrated now, please go ahead or let us know what we should do to fix this. |
Create a GHA that generates an index for every query in the src/sql folder. The GHA will check if the version of IDC is out of date. If so, the queries will be updated and run with BigQuery to generate CSV and Parquet files in the root directory. The CMakeLists file is edited to pick up the CSV and Parquet files generated by BigQuery and package them. Moreover, .gitignore is edited to have a pattern for ignoring service account files. Lastly, if queries are updated, a pull request will be created to bring the queries in sync with the latest IDC release.
A helper Python script is referenced and run in the GHA to generate indices. Static methods are used for functions that do not rely on initialization of a class. SQL queries are reverted back to use idc_current, but explicitly versioned queries are uploaded as GHA artifacts. Pylint is configured to lint the scripts folder as well.
Call the index generator from CMakeLists. Started using a single function to generate indices instead of two, taking boolean inputs for whether or not to generate CSV and Parquet files. Currently hard-coded to generate both formats and not yet fully using the boolean capabilities with CMake. Queries no longer need to be attached as GHA artifacts.
fix minor formatting errors
    $<$<BOOL:IDC_INDEX_DATA_GENERATE_CSV_ARCHIVE>:${download_dir}/idc_index.csv.zip>
    $<$<BOOL:IDC_INDEX_DATA_GENERATE_PARQUET>:${download_dir}/idc_index.parquet>
  COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/scripts/python/idc_index_data_manager.py
    --generate-csv-archive
Should this line instead be
$<$<BOOL:IDC_INDEX_DATA_GENERATE_CSV_ARCHIVE>:--generate-csv-archive>
Also, I do not know if the boolean value for Parquet is passed along to the index manager properly, as it seems to generate both CSV and Parquet although the latter is disabled in CMakeLists.
This was from the latest CD run:
https://github.com/ImagingDataCommons/idc-index-data/actions/runs/8425241913
@jcfr I recall when we met last time you mentioned that one could handle packaging with just Python, and that CMake is not critical. Maybe you can comment on that a bit. I do not want to derail the current development, but I also do not know if learning CMake is a worthwhile investment of effort for Vamsi. Just wanted to raise the idea so he could understand the alternatives better.
add_custom_command(
  OUTPUT
    $<$<BOOL:IDC_INDEX_DATA_GENERATE_CSV_ARCHIVE>:${download_dir}/idc_index.csv.zip>
@jcfr if we want to build this for growth from the start, we cannot hard-code the file names. The generator code will now produce CSV/Parquet files for each of the queries in the sql folder, but any change to the SQL query file names, or the addition of new queries, will break the code at the moment.
It would make more sense to use wildcards to package all CSV/Parquet files depending on what is configured with the flags. Vamsi is going to give it a try to replace with the wildcard, unless you have other thoughts.
Vamsi is going to give it a try to replace with the wildcard
Using a wildcard as such will not work, as the target checks the timestamp of the output (aka the filename) to decide whether it should "re-build" or not.
Once this is integrated, I suggest you create an issue describing the type, name, and size of the index files, how these relate to the queries that generate them, and which domain they relate to (e.g. microscopy, ...).
Let's also note that with working CI and CD pipelines, refactoring will be straightforward.
Ditto. A pure Python plugin can be written to implement similar behavior; there are some hints in the "Discussions" section of one of our recent scikit-build meetings. Given my limited experience with such … Also worth noting that I expect the …
For now, I suggest we create the branch …
Closing. This pull request is superseded by #9 |
This PR aims to address part of #2.
- Manual trigger only for now
- Take all of the queries in the queries folder and run them
- Create artifacts containing the result for each query, saved as CSV and Parquet; files should be named consistently with the query file name and include the IDC version in the file name, by figuring out what idc_current maps to at the time the query is executed
- To get the number of the latest version of IDC, list all of the datasets in the bigquery-public-data project and get the latest idc_v*, or query bigquery-public-data.idc_current.version_metadata to get the latest IDC release (see the sketch after this list)
- Create an issue that will include links to the artifacts generated by the GHA, with a title like "[github-action] Index updates IDC v..." (something like that)
- Replace idc_current with the actual version in the query and save each of the queries as a GHA artifact
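A minimal sketch of how the latest release number could be discovered by listing datasets in the public project; it assumes google-cloud-bigquery is installed and that credentials and a default project are available from the environment:

```python
import re
from google.cloud import bigquery

client = bigquery.Client()  # project and credentials come from the environment
versions = []
for dataset in client.list_datasets(project="bigquery-public-data"):
    match = re.fullmatch(r"idc_v(\d+)", dataset.dataset_id)
    if match:
        versions.append(int(match.group(1)))
print(f"Latest IDC release: idc_v{max(versions)}")
```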
@jcfr I'm still figuring out how cmake works and am not completely sure if this is the best way to package the indices. As we are not uploading the indices as GH release attachments, I removed the download-by-URL logic and instead tried to package the generated indices as part of the CD. I'm working on ci.yml too, but am unable to figure out how to package the indices. Any guidance is much appreciated. https://github.com/vkt1414/idc-index-data/actions/runs/8288398907
@fedorov we may need to decide whether to keep both CSV and Parquet or choose one. In my opinion, we should keep both and slowly phase out CSV. Also, please let me know if this PR satisfies the goals mentioned in "Automate index updates and packaging" #2.
We will need to create a couple of secrets for this GHA. A sample successful run is available here: https://github.com/vkt1414/idc-index-data/actions/runs/8289604309/job/22686311347